[Frontend] Add chunked processing to handle long inputs in embedding models #20837
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Summary of Changes
Hello @x22x22, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant enhancement to vLLM's embedding capabilities by adding support for chunked processing of long text inputs. This feature directly addresses issues such as CUDA crashes and memory constraints when processing text longer than a model's max_model_len, allowing vLLM to generate embeddings for arbitrarily long documents without requiring changes to the underlying models. The solution is implemented at the serving layer, providing a configurable and robust mechanism for handling large inputs while maintaining embedding quality through smart aggregation.
Highlights
- New Feature: Chunked Processing for Embedding Models: Introduced automatic chunked processing at the serving layer for embedding models. This enables vLLM to handle text inputs that exceed the model's maximum context length by splitting them into manageable chunks, processing each independently, and aggregating the results.
- Configuration and Activation: The chunked processing feature is configured via the PoolerConfig by setting enable_chunked_processing: true. It automatically detects when input exceeds max_model_len and triggers the chunking logic.
- Intelligent Aggregation: Implemented a FastChat-style weighted averaging algorithm to combine embeddings from multiple chunks. This method uses token counts as weights, ensuring that longer chunks contribute proportionally more to the final aggregated embedding, preserving semantic quality (see the sketch after this list).
- Backward Compatibility and Model Support: The implementation maintains backward compatibility for short text inputs and requires zero modifications to existing model code. Initially, intfloat/multilingual-e5-large is explicitly supported, with an extensible architecture for other embedding models.
- Documentation and Examples: Added comprehensive documentation detailing the feature, its configuration, how it works, performance characteristics, and limitations. New example scripts (server and client) are provided to demonstrate how to configure and utilize chunked processing for long text embeddings.
Code Review
This pull request introduces a valuable feature for handling long text embeddings by implementing chunked processing. The code is generally well-written, and the documentation and examples are thorough. I've identified a critical bug in the batch processing logic that could lead to incorrect results when multiple long prompts are sent in a single request. I've also provided several suggestions to improve code quality, maintainability, and performance. Once the critical issue is addressed, this will be a great addition to the project.
…g, and update relevant documentation and examples. New example scripts and service startup scripts are added to demonstrate how to configure and utilize chunked processing. Update the model configuration to support long-text processing and implement the chunked processing logic in the code. Signed-off-by: x22x22 <wadeking@qq.com>
Force-pushed from b5f245d to 5398bbd.
… with isort, and ensure the accuracy of docstrings. Signed-off-by: x22x22 <wadeking@qq.com>
…ompts, and improve the implementation of chunk processing to ensure accuracy and efficiency when handling long texts. Meanwhile, relevant type annotations have been updated to enhance code readability and type safety. Signed-off-by: x22x22 <wadeking@qq.com>
…ess of block IDs and fix the block ID conflicts in batch processing. Updated relevant examples to demonstrate the new features. Signed-off-by: x22x22 <wadeking@qq.com>
…f the "Slow Processing" section from 1 to 3 to ensure the accuracy and consistency of the list. Signed-off-by: x22x22 <wadeking@qq.com>
…_CODE to enhance the flexibility of the model name, and use this variable to replace the hard-coded model name in the output information. Ensure that the configuration during service startup is more consistent and maintainable. Signed-off-by: x22x22 <wadeking@qq.com>
…verify the uniqueness of block IDs and resolve the block ID conflict issues in batch processing. Meanwhile, relevant documents and examples have been updated to ensure the accuracy and consistency of long-text processing. Signed-off-by: x22x22 <wadeking@qq.com>
In fact, embedding models are not well suited to extremely long inputs: with too much content, the resulting embeddings can no longer effectively distinguish between similar documents. Here's a simple way to confirm that automatic chunked processing is working effectively. Reference mteb_test_embed_models in vllm/tests/models/language/pooling. Keeping only the very front part of a long context, such as the first 2048 or even 512 tokens, is an extremely strong baseline.
Do the following three comparative experiments.
If automatic chunked processing with multilingual-e5-large on the mteb/T2Reranking dataset (or any test with contexts exceeding 8K) can achieve comparable results, that indicates automatic chunked processing is effective.
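As a rough illustration of that kind of comparison (not a full MTEB run), one could query an OpenAI-compatible vLLM endpoint three ways; the base URL, model name, file path, and word-level truncation below are placeholders, not part of this PR:

```python
import math
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "intfloat/multilingual-e5-large"

def embed(text: str) -> list[float]:
    return client.embeddings.create(model=MODEL, input=[text]).data[0].embedding

def truncate_words(text: str, n: int) -> str:
    # Crude word-level proxy for token truncation.
    return " ".join(text.split()[:n])

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

long_doc = open("long_document.txt").read()
variants = {
    "first-512": embed(truncate_words(long_doc, 512)),    # truncation baseline
    "first-2048": embed(truncate_words(long_doc, 2048)),  # stronger truncation baseline
    "full-chunked": embed(long_doc),                      # server-side chunked processing
}
query_emb = embed("short query describing the document's topic")
for name, emb in variants.items():
    print(name, round(cosine(query_emb, emb), 4))
```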
@noooop I've manually tested using text chunks exceeding 1,000 tokens in vector databases, and confirmed that short user queries or task descriptions (~100 tokens) can successfully retrieve relevant text fragments. While this verification isn't scientifically rigorous, it demonstrates a viable practical solution. I'll allocate time later to run the benchmark tests you recommended - appreciate the suggestion.
After some investigation: intfloat/multilingual-e5-large uses the classic BERT architecture with a context length of 512, which looks very weak in 2025. Please perform a comparative test using jina-embeddings-v3, which has a maximum context length of 8192 and uses mean pooling. Unless you use VLLM_ALLOW_LONG_MAX_MODEL_LEN or similar, you should not allow the context of intfloat/multilingual-e5-large to be set beyond 512, as it will exceed the position embeddings and cause an out-of-bounds error. It is not a bug. Please weaken or remove the content related to CUDA crashes.
@noooop The purpose is to enable models like intfloat/multilingual-e5-large to handle inputs beyond their native context length. While this approach may not deliver optimal embedding performance, it provides a practical low-cost solution for RAG scenarios requiring simultaneous processing of both short and long texts. Crucially, no performance penalty occurs when input stays within a model's native context limit (e.g. ≤512 for E5, ≤8192 for Jina), as no special chunking gets triggered. Would you be open to continuing this discussion more efficiently via https://slack.vllm.ai? I've requested access to the Slack workspace but haven't received approval yet - perhaps we could connect there once I'm onboarded.
I looked through the code carefully. You can add a new parameter such as max_embed_len, but do not modify any code related to max_model_len; that will cause a huge number of bugs. And do not use VLLM_ALLOW_LONG_MAX_MODEL_LEN. I think we should remove VLLM_ALLOW_LONG_MAX_MODEL_LEN; I can't think of any use case that would require this flag.
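A minimal sketch of the separation being suggested, with hypothetical names (max_embed_len as a serving-layer limit, max_model_len left untouched):

```python
def plan_embedding_request(num_tokens: int, max_model_len: int,
                           max_embed_len: int | None = None) -> int:
    """Return how many chunks the request needs, or raise if it is too long.

    max_model_len still bounds what a single forward pass may see;
    max_embed_len only bounds what the serving layer accepts before
    splitting the input into chunks.
    """
    limit = max_embed_len if max_embed_len is not None else max_model_len
    if num_tokens > limit:
        raise ValueError(f"Input of {num_tokens} tokens exceeds the allowed "
                         f"maximum of {limit} tokens.")
    # Ceiling division: each chunk holds at most max_model_len tokens.
    return -(-num_tokens // max_model_len)
```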
I've committed my modifications to https://github.com/x22x22/LongEmbed/tree/feature/add-openai-embedding-support, please pull the branch.

```bash
# You can skip installing flash-attn and other dependencies, as we mainly need mteb, openai>=1.0.0, and tiktoken
pip install -r requirements.txt
# Modify the BASE_URL and API_KEY in scripts/run_openai_long_embed.sh
/bin/bash scripts/run_openai_long_embed.sh
```

After evaluation is complete, the results will be output to ./results. I submitted this PR hoping to find a universal context extension approach for embedding models without intrusive modifications to the model source code - that's the advantage of this method. Of course, I understand this doesn't conflict with the optimization methods mentioned in the LongEmbed paper. We could also create another PR that targets different models by modifying their source code to extend embedding model context.
@x22x22 , I implemented the RP, GP, and PI methods from the paper but couldn't get good results. For e5-multilingual-large, this is the typical output I'm getting with these context extension methods:
Up to 1024 and 2048 they seem to work, and then the performance just drops. On the other hand, I was able to reproduce your results with your branch, which is always good. Can you also run the benchmark for models with CLS or LAST pooling?
e5-multilingual-large uses mean pooling ("pooling_mode_mean_tokens": true in https://huggingface.co/intfloat/multilingual-e5-large/blob/main/1_Pooling/config.json). Model inference with a pooling method different from the one used in training gives very poor results. If you want to experiment with whether this method is compatible with CLS pooling, use a model that was trained with CLS pooling.
Thanks, @noooop, I meant adding test results for other models with different pooling methods, not e5-large-multilingual, as forcing another pooling method on that one would lead to poor results, as you pointed out.
@maxdebayser
Based on the evaluation results, the findings for CLS and LAST pooling are as follows:
Since Qwen3-Embedding-0.6B natively supports 32k length and the LongEmbed project's longest evaluation dataset is 32k, automatic chunked processing cannot be triggered if the length doesn't exceed 32k. Therefore, several extended-length evaluation datasets were created based on the following logic:
It's important to note that the evaluation dataset lengths in LongEmbed refer to string length, not token length. Therefore, only evaluation datasets of length 36864 and above truly contain content with token lengths exceeding 32k. Focus should be placed on evaluation scores for lengths greater than or equal to 36864.
In fact, I am more concerned about this data. Please use
I even feel that the automatic chunked processing extension may perform better than the original context expansion in some scenarios, beyond just being a speed-up (more than a performance-vs-speed trade-off).
@noooop
I think automatic chunked processing is a method of context expansion for embedding models.
max_embed_len = 8K means we can only test datasets up to a maximum of 8k. So I only need to provide you with evaluation results for 512~8k, is that correct?
jina-embeddings-v3 and BAAI/bge-m3 natively only support 8k. The computational complexity of self-attention is O(n^2), so computing 16 blocks of 512 should be faster than computing one block of 8K.
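For the quadratic attention-score term alone (ignoring the linear layers, which scale linearly with length), splitting one 8K sequence into sixteen 512-token chunks is roughly a 16x reduction:

$$
16 \times 512^2 \approx 4.2 \times 10^6 \quad \text{vs.} \quad 8192^2 \approx 6.7 \times 10^7
$$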
This pull request has merge conflicts that must be resolved before it can be merged.
@x22x22 , thanks for the result. This is pretty cool, it feels like we're writing a paper. Here is a model with 512 context and CLS-based pooling: ibm-granite/granite-embedding-278m-multilingual. I haven't found one with LAST pooling yet. My interest in testing this is that I think that regardless of the pooling type of the underlying model, we should see an extension with MEAN-based chunking. Using the CLS or LAST strategy on the chunks doesn't seem to make sense to me, because then we're just throwing away chunks. It would be the same as just truncating the input left or right and sending a single chunk of tokens through the model.
…rocessing function, and elaborate on the processing methods and performance characteristics of different pooling types (MEAN, CLS, LAST). Optimize the configuration parameters to ensure that users receive clear warning messages when using non-MEAN pooling, and enhance the support for long-text input. Signed-off-by: x22x22 <wadeking@qq.com>
Signed-off-by: x22x22 <wadeking@qq.com>
https://x22x22.github.io/embedding_models_comprehensive_evaluation.html
@maxdebayser buildkite/fastcheck/pr/docker-build-image has failed to run. It seems that the operation timed out. Could you assist in rerunning it?
… ensure consistency between the task and the model configuration. If validation fails, return an error response. Signed-off-by: x22x22 <wadeking@qq.com>
My code has been submitted and all CI tests have passed. What else needs to be done to merge my PR? Thanks.
Thank you for the thorough testing; native does indeed perform better.
Yes, when staying within the model's native length, the automatic chunked processing mechanism shows no advantage in terms of recall metrics alone. However, looking back at cases where the input exceeds the native length by 1-2 times, there is indeed an advantage in recall rates. This improvement appears almost exclusively in embedding models that use MEAN pooling strategies. Therefore, I still recommend not modifying max_length to trigger forced automatic chunked processing.
@x22x22 , thanks for the comprehensive tests. Actually, I think that we shouldn't touch the native pooling type while testing, because changing from MEAN to CLS, for example, is going to perform poorly. Here are three models that we could test that have the 3 pooling types and shortish contexts: LAST: BAAI/bge-multilingual-gemma2, context 8k. If we could test each model with the 3 different aggregations, I think that would show that mean aggregation always performs better (without changing the native pooling type).
Let's say that, depending on the results, mean aggregation always performs better. In this case, the code can be simplified a lot.
Hi @maxdebayser , thank you for the detailed feedback! I want to make sure I understand your suggestions correctly.
For Testing:
And for each model, test all 3 aggregation strategies (MEAN, LAST, CLS) to compare their performance, while keeping each model's native pooling type unchanged.
For Code Simplification:
Next Steps: Please let me know if my understanding is correct, and I'll proceed with implementing the tests accordingly. Thanks!
I hope this work is accepted.
So next,
- we should organize the test results and inform users which approaches are best practice.
- Those combinations that perform poorly should be avoided,
- and those with potential issues need to be restricted at the code level.
LongEmbed is likely the most challenging benchmark. In other tests, the method performs notably better.
Thank you for the approval! I have a few questions about the next steps you mentioned:
1. Best Practices Documentation:
2. Code-level Restrictions:
My preference would be to:
This approach would help avoid merge conflicts and make the review process more manageable. What are your thoughts on this approach?
```python
else:
    # Fall back to max_model_len validation (original behavior)
    effective_max_len = self.max_model_len
    validation_error_msg = (
```
The error message is the same; only the variables change. You can use str.format() instead of f-strings.
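A minimal sketch of that suggestion, with an illustrative message template and variable names rather than the PR's actual ones:

```python
# One shared template; only the limit's name and value change per branch.
_VALIDATION_ERROR_TEMPLATE = (
    "This request has {num_tokens} tokens, which exceeds the maximum "
    "{limit_name} of {max_len} tokens.")

def build_validation_error(num_tokens: int, max_model_len: int,
                           max_embed_len: int | None = None) -> str:
    if max_embed_len is not None:
        return _VALIDATION_ERROR_TEMPLATE.format(
            num_tokens=num_tokens, limit_name="embedding input length",
            max_len=max_embed_len)
    # Fall back to max_model_len validation (original behavior).
    return _VALIDATION_ERROR_TEMPLATE.format(
        num_tokens=num_tokens, limit_name="context length",
        max_len=max_model_len)
```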
These tests will help us understand which parts of the code are actually needed. Once we know that, I think there will be opportunities for refactoring and reducing the number of lines of code by a good amount.
…g, and update relevant documentation and examples. New example scripts and service startup scripts are added to demonstrate how to configure and utilize chunked processing. Update the model configuration to support long-text processing and implement the chunked processing logic in the code.
Essential Elements of an Effective PR Description Checklist
- supported_models.md and examples for a new model.

Purpose
Add chunked processing support for long text embeddings to resolve CUDA crashes when input text exceeds the model's maximum context length.
Problem Solved
- Inputs longer than max_model_len crash with CUDA errors instead of returning an embedding.
Solution
This PR implements automatic chunked processing at the serving layer that:
Key Features
- enable_chunked_processing: true in pooler config

Supported Models
- intfloat/multilingual-e5-large (initially)

This enables vLLM to handle embedding requests of any length without crashes, significantly expanding its utility for RAG applications and long document processing.
Test Plan
Long Text Embedding with Chunked Processing
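A sketch of this kind of test, assuming a server started with the pooler-config option described above; the CLI invocation, port, text length, and script details below are illustrative, not the PR's exact test commands:

```python
# Assumed server launch (illustrative):
#   vllm serve intfloat/multilingual-e5-large \
#     --override-pooler-config '{"enable_chunked_processing": true}'
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Far beyond the model's 512-token native context, so chunking must kick in.
long_text = "vLLM chunked embedding smoke test. " * 5000
resp = client.embeddings.create(model="intfloat/multilingual-e5-large",
                                input=[long_text])
print(len(resp.data[0].embedding))  # expect the model's embedding size (1024 for e5-large)
```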
Test Result
Before modification
After modification
(Optional) Documentation Update